Recently, the success of pre-training in the text domain has been fully extended to vision, audio, and cross-modal scenarios. The pre-training models proposed for different modalities show a rising trend of homogeneity in their model structures, which opens the opportunity to implement different pre-training models within a uniform framework. In this paper, we present TencentPretrain, a toolkit supporting pre-training models of different modalities. The core feature of TencentPretrain is its modular design. The toolkit uniformly divides pre-training models into five components: embedding, encoder, target embedding, decoder, and target. Since almost all common modules are provided for each component, users can choose the desired modules from different components to build a complete pre-training model. The modular design enables users to efficiently reproduce existing pre-training models or build brand-new ones. We test the toolkit on text, vision, and audio benchmarks and show that it can match the performance of the original implementations.
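As a rough illustration of the five-component decomposition, the sketch below composes user-supplied modules into one model. All names and signatures here are assumptions for exposition, not TencentPretrain's actual API:

```python
import torch.nn as nn

class PretrainModel(nn.Module):
    """Hypothetical five-component assembly: embedding -> encoder ->
    (target embedding -> decoder) -> target/loss head."""
    def __init__(self, embedding, encoder, tgt_embedding=None, decoder=None, target=None):
        super().__init__()
        self.embedding, self.encoder = embedding, encoder
        self.tgt_embedding, self.decoder = tgt_embedding, decoder
        self.target = target

    def forward(self, src, tgt, seg):
        hidden = self.encoder(self.embedding(src, seg), seg)
        if self.decoder is not None:             # encoder-decoder models (e.g., seq2seq)
            hidden = self.decoder(self.tgt_embedding(tgt, None), hidden)
        return self.target(hidden, tgt)          # e.g., an MLM loss head
```

Swapping one component (say, a ViT-style encoder for a BERT-style one) while keeping the other four fixed is the kind of reuse the modular design enables.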
Crowd counting is usually handled in a density map regression fashion, supervised via an L2 loss between the predicted density map and the ground truth. To regulate models effectively, various improved L2 loss functions have been proposed to find a better correspondence between predicted density and annotation positions. In this paper, we propose to predict the density map at one resolution but measure it at multiple resolutions. By maximizing the posterior probability in such a setting, we obtain a log-formed multi-resolution L2-difference loss, of which the traditional single-resolution L2 loss is a particular case. We mathematically prove it is superior to a single-resolution L2 loss. Without bells and whistles, the proposed loss substantially improves several baselines and performs favorably compared to state-of-the-art methods on four crowd counting datasets: ShanghaiTech A & B, UCF-QNRF, and JHU-Crowd++.
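A minimal sketch of the multi-resolution idea, assuming uniform level weights and omitting the paper's exact log-formed posterior derivation: the same prediction is compared against the ground truth over a pyramid of average-pooled resolutions.

```python
import torch.nn.functional as F

def multires_l2_loss(pred, gt, levels=3):
    # pred, gt: (batch, 1, H, W) density maps.
    # Average pooling preserves (scaled) local counts when downsampling,
    # so each level compares densities aggregated over larger regions.
    loss = F.mse_loss(pred, gt)
    for _ in range(levels - 1):
        pred = F.avg_pool2d(pred, 2)
        gt = F.avg_pool2d(gt, 2)
        loss = loss + F.mse_loss(pred, gt)
    return loss
```

With `levels=1` this reduces to the traditional single-resolution L2 loss, mirroring the particular-case relationship stated in the abstract.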
Sophisticated traffic analytics, such as encrypted traffic analysis and unknown malware detection, highlight the need for advanced methods to analyze network traffic. Traditional approaches that use fixed patterns, signature matching, and rules to detect known patterns in network traffic are being replaced by AI (Artificial Intelligence)-driven algorithms. However, the lack of a high-performance, network-specific AI framework makes it impossible to deploy AI-based real-time processing within networking workloads. In this paper, we describe the design of the Traffic Analytics Development Kit (TADK), an industry-standard framework for AI-based network workload processing. TADK enables real-time AI-based network workload processing on network equipment from the data center to the edge, without requiring specialized hardware (e.g., GPUs, Neural Processing Units, etc.). We have deployed TADK in a commodity WAF and 5G UPF, and the evaluation results show that TADK can achieve a throughput of up to 35.3 Gbps per core for traffic feature extraction and 6.5 Gbps per core for traffic classification, and can reduce SQLi/XSS detection latency down to 4.5 us per request with higher accuracy than fixed-pattern solutions.
Event analysis in untrimmed videos has attracted increasing attention due to the application of cutting-edge techniques such as CNNs. As a well-studied property of CNN-based models, the receptive field is a measurement of the spatial extent covered by a single feature response, and it is crucial for improving image classification accuracy. In the video domain, video event semantics are actually described by complex interactions among different concepts, whose behaviors vary drastically from one video to another, making it difficult for concept-based analysis to classify events accurately. To model concept behaviors, we study the temporal concept receptive field of concept-based event representations, which encodes the temporal occurrence patterns of different mid-level concepts. Accordingly, we introduce temporal dynamic convolution (TDC) to give concept-based event analysis stronger flexibility. TDC can adjust the temporal concept receptive field size dynamically according to different inputs. Notably, a set of coefficients is learned to fuse the results of multiple convolutions with different kernel widths, which provide various temporal concept receptive field sizes. Different coefficients can generate appropriate and accurate temporal concept receptive field sizes according to the input video and highlight crucial concepts. Based on TDC, we propose the temporal dynamic concept modeling network (TDCMN) to learn accurate and complete concept representations for efficient untrimmed video analysis. Experimental results on FCVID and ActivityNet show that TDCMN demonstrates adaptive event recognition ability conditioned on different inputs and improves the event recognition performance of concept-based methods by a large margin. Code is available at https://github.com/qzhb/tdcmn.
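A hedged sketch of the TDC mechanism as described: parallel temporal convolutions with different kernel widths are fused by input-conditioned coefficients. The module layout and the pooling-based coefficient generator are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TemporalDynamicConv(nn.Module):
    """Fuse parallel temporal convolutions (different kernel widths, hence
    different temporal receptive fields) with input-dependent weights."""
    def __init__(self, channels, widths=(1, 3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2) for k in widths
        )
        # Coefficient generator: global temporal pooling -> softmax weights.
        self.coef = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(channels, len(widths)), nn.Softmax(dim=-1),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        w = self.coef(x)                     # (batch, num_branches)
        outs = torch.stack([b(x) for b in self.branches], dim=1)
        return (w[:, :, None, None] * outs).sum(dim=1)
```

Because the branch weights depend on the pooled input, the effective temporal receptive field differs per video, which is the adaptivity the abstract claims.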
Scene text recognition (STR) enables computers to recognize and read text in various real-world scenes. Recent STR models benefit from taking linguistic information into consideration in addition to visual cues. We propose novel Masked Vision-Language Transformers (MVLT) to capture both explicit and implicit linguistic information. Our encoder is a Vision Transformer, and our decoder is a multi-modal Transformer. MVLT is trained in two stages: in the first stage, we design an STR-tailored pretraining method based on a masking strategy; in the second stage, we fine-tune our model and adopt an iterative correction method to improve performance. MVLT attains superior results compared to state-of-the-art STR models on several benchmarks. Our code and model are available at https://github.com/onealwj/MVLT.
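The iterative correction step can be pictured as a simple re-decoding loop; the sketch below is a guess at the control flow, with `decoder` standing in for MVLT's multi-modal Transformer decoder and the loop count chosen arbitrarily.

```python
import torch

@torch.no_grad()
def iterative_correction(decoder, visual_feats, tokens, n_iter=3):
    # Re-decode with the previous prediction as linguistic context, so
    # visually ambiguous characters can be fixed by language cues.
    for _ in range(n_iter):
        logits = decoder(visual_feats, tokens)   # (batch, seq_len, vocab)
        tokens = logits.argmax(dim=-1)
    return tokens
```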
Tag-aware recommendation is the task of predicting personalized items for users via their tagging behaviors. It is crucial for many applications with tagging capabilities, such as Last.FM or MovieLens. Recently, many efforts have been devoted to improving tag-aware recommender systems (TRS) with graph convolutional networks (GCN), which have become the new state of the art for general recommendation. However, some solutions are inherited directly from GCN without justification, and they struggle to alleviate the sparsity, ambiguity, and redundancy issues introduced by tags, thus increasing training difficulty and degrading recommendation performance. In this work, we aim to simplify the design of GCN to make it more concise for TRS. We propose a novel tag-aware recommendation model named Light Folksonomy Graph Collaborative Filtering (LFGCF), which includes only the essential GCN components. Specifically, LFGCF first constructs folksonomy graphs from the user records of tagging items. Then we leverage a simple aggregation design to learn high-order representations on the folksonomy graphs and use the weighted sum of the embeddings learned at multiple layers for information updating. We share tag embeddings to bridge the information gap between users and items. Additionally, a regularization function named TransRT is proposed to better depict user preferences and item features. Extensive hyperparameter experiments and ablation studies on three real-world datasets show that LFGCF uses fewer parameters and significantly outperforms most baselines for tag-aware top-N recommendation.
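The "essential GCN components" boil down to transform-free propagation plus a weighted sum over layers. A minimal sketch, assuming a pre-normalized sparse folksonomy adjacency and uniform layer weights (both assumptions):

```python
import torch

def light_propagate(adj, emb, num_layers=3):
    # LightGCN-style aggregation: no feature transforms, no nonlinearities.
    # The output is the (here uniform) weighted sum of all layers' embeddings.
    out = emb / (num_layers + 1)
    for _ in range(num_layers):
        emb = torch.sparse.mm(adj, emb)      # propagate over the folksonomy graph
        out = out + emb / (num_layers + 1)
    return out
```

Dropping the per-layer weight matrices and activations is precisely what makes such a design "light" relative to a full GCN.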
In recent years, crowd counting has become an important problem in computer vision. In most methods, density maps are generated by convolving a Gaussian kernel with ground-truth dot maps annotated at the centers of people's heads. Due to the fixed geometric structures in CNNs and indistinct head-scale information, head features cannot be obtained completely. Deformable convolution has been proposed to exploit the scale-adaptive capability of CNN features for heads: by learning coordinate offsets of the sampling points, the ability to adjust the receptive field is improved. However, the sampling points of deformable convolution are not distributed evenly within the head area, resulting in a loss of head information. To handle this uneven sampling, an improved Normed-Deformable Convolution (i.e., NDConv), implemented via a Normed-Deformable loss (i.e., NDloss), is proposed in this paper. The offsets of sampling points constrained by NDloss tend to be more even; head features are then obtained more completely, leading to better performance. Notably, the proposed NDConv is a lightweight module with a computational burden similar to deformable convolution. In extensive experiments, our method achieves superior performance on the ShanghaiTech A, ShanghaiTech B, UCF_QNRF, and UCF_CC_50 datasets, attaining 61.4, 7.8, 91.2, and 167.2 MAE, respectively. The code is available at https://github.com/bingshuangzhuzi/ndconv.
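One plausible reading of the even-offset constraint is a penalty on how far each deformable sampling point strays from the kernel's mean shift; the paper's exact NDloss formulation differs, so treat this as a hypothetical stand-in.

```python
import torch

def nd_loss(offsets):
    # offsets: (batch, 2*K, H, W) from a deformable-conv offset head,
    # i.e., K learned (dy, dx) pairs per spatial location.
    b, c, h, w = offsets.shape
    pts = offsets.view(b, c // 2, 2, h, w)
    mean = pts.mean(dim=1, keepdim=True)     # rigid translation of the kernel
    # Penalize per-point deviation so the kernel shifts as a whole
    # instead of scattering sampling points unevenly over the head.
    return ((pts - mean) ** 2).mean()
```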
Over-smoothing is a challenging problem that degrades the performance of deep graph convolutional networks (GCNs). However, existing studies on alleviating the over-smoothing problem lack either generality or effectiveness. In this paper, we analyze the underlying issues behind the over-smoothing problem, namely feature-diversity degeneration, gradient vanishing, and model weight over-decaying. Inspired by this analysis, we propose a simple yet effective plug-and-play module, SkipNode, to alleviate over-smoothing. Specifically, for each middle layer of a GCN model, nodes are selected randomly (or based on node degree) to skip the convolutional operation by feeding their input features directly into the nonlinear function. Analytically, 1) skipping the convolutional operation prevents the features from losing diversity, and 2) the "skipped" nodes enable gradients to be passed back directly, thus mitigating the gradient vanishing and model weight over-decaying issues. To demonstrate the superiority of SkipNode, we conduct extensive experiments on nine popular datasets, covering both homophilic and heterophilic graphs of different sizes, on two typical tasks: node classification and link prediction. Specifically, 1) SkipNode applies generally to various GCN-based models across different datasets and tasks, and 2) SkipNode outperforms the recent state-of-the-art anti-over-smoothing plug-and-play modules, i.e., DropEdge and DropNode, in different settings. Code will be made publicly available on GitHub.
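The SkipNode operation itself reduces to a per-node gate between the convolved and the raw features. A minimal sketch of the uniform-sampling variant, assuming a plain one-layer GCN and equal input/output width so skipped features align (both assumptions):

```python
import torch
import torch.nn.functional as F

def skipnode_layer(x, adj, weight, p=0.5):
    # Sampled nodes bypass the graph convolution and keep their input
    # features; both paths pass through the same nonlinearity.
    conv_out = torch.sparse.mm(adj, x @ weight)          # plain GCN layer
    skip = (torch.rand(x.size(0), 1, device=x.device) < p).float()
    return F.relu(skip * x + (1.0 - skip) * conv_out)
```

The degree-based variant mentioned in the abstract would replace the uniform `torch.rand` sampling with probabilities derived from node degrees.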
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset; a model trained on this smaller distilled dataset can attain performance comparable to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation, in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, which is where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
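NAIVEATTACK's trigger injection amounts to stamping a fixed patch into the raw images before distillation begins; the sketch below assumes a corner patch and a constant trigger value, both illustrative rather than the paper's exact setup.

```python
import torch

def stamp_trigger(images, trigger_value=1.0, size=4):
    # images: (batch, channels, H, W). Stamp a fixed patch into the
    # bottom-right corner of the raw data fed to the distillation step.
    # DOORPING would instead re-optimize the trigger at every
    # distillation iteration rather than keeping it fixed.
    images = images.clone()
    images[:, :, -size:, -size:] = trigger_value
    return images
```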